
    A method for generating realistic correlation matrices

    Simulating sample correlation matrices is important in many areas of statistics. Approaches such as generating Gaussian data and computing their sample correlation matrix, or drawing random uniform [-1,1] deviates as pairwise correlations, both have drawbacks. We develop an algorithm for adding noise, in a highly controlled manner, to general correlation matrices. In many instances, our method yields results superior to those obtained by simply simulating Gaussian data. Moreover, we demonstrate how our general algorithm can be tailored to a number of different correlation models. Using our results in a few different applications, we show that simulating correlation matrices can help assess statistical methodology.
    Comment: Published at http://dx.doi.org/10.1214/13-AOAS638 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
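    The Gaussian baseline the abstract contrasts against can be sketched as follows. This is not the paper's algorithm, only the "generate Gaussian data and take their sample correlation matrix" approach it mentions; the target matrix and sample size are invented for illustration.

    ```python
    import numpy as np

    def sample_corr_from_gaussian(true_corr, n_samples, rng=None):
        """Draw Gaussian data with a target correlation structure, then
        return the resulting (noisy) sample correlation matrix."""
        rng = np.random.default_rng(rng)
        p = true_corr.shape[0]
        L = np.linalg.cholesky(true_corr)           # requires positive definiteness
        x = rng.standard_normal((n_samples, p)) @ L.T
        return np.corrcoef(x, rowvar=False)

    target = np.array([[1.0, 0.5, 0.3],
                       [0.5, 1.0, 0.2],
                       [0.3, 0.2, 1.0]])
    noisy = sample_corr_from_gaussian(target, n_samples=200, rng=0)
    ```

    One drawback is visible here: the amount of noise is coupled to the sample size and cannot be controlled independently of it, which is the limitation the paper's controlled-noise algorithm addresses.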

    Synthesis of Verified Architectural Components for Critical Systems Hosted on a Verified Microkernel

    We describe a method and tools for the creation of formally verified components that run on the verified seL4 microkernel. This synthesis and verification environment provides a basis to create safe and secure critical systems. The mathematically proved space and time separation properties of seL4 are particularly well-suited for the miniaturised electronics of smaller, lower-cost Unmanned Aerial Vehicles (UAVs), as multiple, independent UAV applications can be hosted on a single CPU with high assurance. We illustrate our method and tools with an example that implements security-improving transformations on system architectures captured in the Architecture Analysis and Design Language (AADL). We show how input validation filter components can be synthesized from regular expressions, and verified to meet arithmetic constraints extracted from the AADL model. Such filters constitute efficient guards on messages to/from the autonomous system. The correctness proofs for filters are automatically lifted to proofs of the corresponding properties on the lazy streams that model the communications of the generated seL4 threads. Finally, we guarantee that the intent of the autonomy application logic is accurately reflected in the application binary code hosted on seL4 through the use of the verified CakeML compiler.
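    The filter idea above can be sketched in miniature: a message is admitted only if it matches a regular expression and the extracted value satisfies an arithmetic constraint. Both the message format and the bound are hypothetical stand-ins; the paper's filters are synthesized from the AADL model and formally verified, which this toy is not.

    ```python
    import re

    # Hypothetical message format: an altitude report such as "ALT=12000".
    ALTITUDE_MSG = re.compile(r"^ALT=(\d{1,5})$")

    def filter_message(msg: str, max_alt: int = 18000) -> bool:
        """Admit a message only if it matches the regular expression AND
        the parsed value satisfies the arithmetic constraint."""
        m = ALTITUDE_MSG.match(msg)
        return m is not None and int(m.group(1)) <= max_alt

    filter_message("ALT=12000")  # accepted
    filter_message("ALT=99999")  # rejected: violates the arithmetic bound
    filter_message("ALT=-5")     # rejected: fails the regular expression
    ```

    Composing the two checks in one guard is what makes such filters cheap to place on every message channel between components.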

    Development of a Translator from LLVM to ACL2

    In our current work, a library of formally verified software components is to be created and assembled, using the Low-Level Virtual Machine (LLVM) intermediate form, into subsystems whose top-level assurance relies on the assurance of the individual components. We have thus undertaken a project to build a translator from LLVM to the applicative subset of Common Lisp accepted by the ACL2 theorem prover. Our translator produces executable ACL2 formal models, allowing us both to prove theorems about the translated models and to validate those models by testing. The resulting models can be translated and certified without user intervention, even for code with loops, thanks to the use of the def::ung macro, which allows us to defer the question of termination. Initial measurements of concrete execution for translated LLVM functions indicate a performance of nearly 2.4 million LLVM instructions per second on a typical laptop computer. In this paper we give an overview of the translation process and illustrate the translator's capabilities by way of a concrete example, including both a functional correctness theorem and a validation test for that example.
    Comment: In Proceedings ACL2 2014, arXiv:1406.123
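    The flavor of such a translation can be sketched with a toy: mapping a couple of LLVM-like instructions to applicative, Lisp-style s-expressions. Everything here is invented for illustration; the real translator targets ACL2's applicative subset and handles loops via the def::ung macro, which this fragment does not attempt.

    ```python
    # Toy translator: each LLVM-like instruction tuple becomes a fragment of
    # an applicative s-expression, so SSA assignments turn into let-bindings.
    def translate(instr):
        op = instr[0]
        if op == "add":                   # %dst = add %a, %b
            _, dst, a, b = instr
            return f"(let (({dst} (+ {a} {b})))"
        if op == "ret":                   # ret %v  -> close the let body
            return f"{instr[1]})"
        raise NotImplementedError(op)

    body = [("add", "x", "a", "b"), ("ret", "x")]
    print(" ".join(translate(i) for i in body))   # prints (let ((x (+ a b))) x)
    ```

    The pattern of SSA registers becoming let-bound names is the essence of translating straight-line LLVM into an applicative form; control flow and memory are where the real work lies.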

    An investigation of nucleate and film boiling heat transfer from copper spheres

    This study consisted primarily of a laboratory investigation involving nucleate and film boiling heat transfer from copper spheres with saturated liquid nitrogen at atmospheric pressure. An initial study was directed towards obtaining reproducible boiling heat flux versus ΔT curves in the nucleate boiling region from surface conditions created by single and/or multiple glass bead peenings. The variables of bead size, nozzle line operating pressure, and distance a copper surface should be placed from the nozzle outlet were checked. The results indicated that a final peening of the surface with 0.0017-0.0035 inch diameter glass beads, a nozzle line pressure of approximately 20 psig, and the surface being peened placed approximately 2.0 inches from the nozzle outlet produced the most desirable surface condition. Based on the conclusions of the initial surface condition study, a series of boiling heat flux versus ΔT curves for nucleate and film boiling were obtained from a 2.25 inch O.D. hollow copper sphere. The transient technique was used to obtain the necessary data for these curves from a machined, single peened, sandblasted and multiple repeened surface. The resulting boiling heat flux versus ΔT curves indicated a high degree of reproducibility from the single peened and multiple glass bead repeened spherical surface. In the film boiling region, the results indicated that surface conditions affect the minimum boiling heat flux (Leidenfrost Point) and the corresponding ΔT at which it occurs. In general, the results indicated that a peened spherical surface yields a Leidenfrost Point comparable to a polished spherical surface. A peened 0.75 inch O.D. solid copper sphere was oscillated with peak-to-peak amplitude-to-diameter ratios (X/d) of 2.40, 5.73 and 7.33 and at frequencies from 3.0 to 10.15 cps. More than 100 percent increase in film boiling heat flux was noted over a stationary condition when the sphere was oscillated with an X/d ratio of 7.33 and a frequency of 6.03 cps. At the two larger X/d ratios, the boiling heat flux versus ΔT curves in the film boiling region indicated steeper curve slopes. Heat flux versus ΔT curves in the nucleate and film boiling regions were obtained from peened 0.25, 0.125, and 0.0625 inch diameter solid copper spheres. The boiling heat flux versus ΔT curves obtained from these spheres in the nucleate boiling region differed from those curves obtained from larger peened and polished spherical surfaces in both the magnitude of peak nucleate boiling heat flux and critical ΔT. Accordingly, the deviation of these characteristics between the three diameter spheres indicated that the critical ΔT decreased as the sphere diameter was decreased. The film boiling heat fluxes for the three spheres were found to increase as sphere diameter was decreased. In addition, the correlation equation presented by Frederking and Clark [1]* for the Nusselt number was not representative of these size spheres. Utilizing the steady state technique, boiling heat flux versus ΔT curves in the nucleate and film boiling regions were obtained from a peened 2.25 inch O.D. hollow copper sphere with an enclosed heater. The resulting boiling heat flux versus ΔT curves were found to be comparable to those obtained by the transient technique from the similar 2.25 inch O.D. hollow copper sphere. *Numbers in brackets refer to listing in the Bibliography --Abstract, pages ii-iii
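    The transient technique mentioned above can be sketched as a lumped-capacitance energy balance: the boiling heat flux is inferred from the measured cooling rate of the sphere, q'' = (m·c_p/A)·|dT/dt|. The property values and the cooling curve below are illustrative stand-ins, not data from the study.

    ```python
    import numpy as np

    def transient_heat_flux(times, temps, mass, c_p, area):
        """Boiling heat flux (W/m^2) from transient cooling data via a
        lumped-capacitance energy balance: q'' = (m * c_p / A) * |dT/dt|."""
        dTdt = np.gradient(temps, times)             # cooling rate, K/s
        return mass * c_p * np.abs(dTdt) / area      # heat flux, W/m^2

    t = np.linspace(0.0, 10.0, 50)                   # time, s
    T = 300.0 * np.exp(-0.05 * t) + 77.0             # fabricated cooling toward LN2 saturation
    q = transient_heat_flux(t, T, mass=0.15, c_p=385.0, area=0.0103)
    ```

    Pairing each q'' with the instantaneous ΔT = T − T_sat is what yields the boiling-curve (heat flux versus ΔT) plots the abstract describes.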

    The effect of porosity distribution on the predicted mechanical response of die cast AM60B magnesium

    In this paper, it is clearly shown that the distribution of the initial porosity is a critical factor in the prediction of damage evolution and initiation of failure in a cast AM60B magnesium notched Bridgman tensile specimen. Using X-ray computed tomography, the actual initial porosity distribution was obtained, and this distribution was input into a finite element code as an initial condition. The predicted damage evolution from this simulation was compared to the damage evolution of the experimental specimen as well as to other simulated porosity distributions. This study shows that the simulation using the actual porosity distribution predicted well the damage evolution observed in the experiment. It is also shown that the initial distribution of porosity plays a vital role in the predicted elongation to failure of a notched specimen. The actual distribution was shown to fail at a significantly lower strain than random or uniformly distributed damage.
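    The mapping step described above, taking a CT-derived voxel porosity field into element-wise initial conditions, can be sketched by averaging voxels over coarser finite-element cells. The grid sizes and the porosity field here are invented; the paper's actual workflow and element topology are not specified in the abstract.

    ```python
    import numpy as np

    def porosity_per_element(voxels, elems_per_axis):
        """Average a cubic voxel porosity field over a coarser grid of
        elements, giving each element one initial porosity value."""
        n = voxels.shape[0] // elems_per_axis        # voxels per element edge
        m = elems_per_axis * n
        v = voxels[:m, :m, :m]
        v = v.reshape(elems_per_axis, n, elems_per_axis, n, elems_per_axis, n)
        return v.mean(axis=(1, 3, 5))

    rng = np.random.default_rng(0)
    ct = rng.uniform(0.0, 0.02, size=(32, 32, 32))   # fabricated porosity fractions
    init = porosity_per_element(ct, elems_per_axis=8)  # shape (8, 8, 8)
    ```

    Preserving the spatial pattern of the measured field, rather than assigning a uniform or random porosity, is exactly the distinction the study shows matters for predicted failure.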

    Verification of a Rust Implementation of Knuth's Dancing Links using ACL2

    Dancing Links connotes an optimization to a circular doubly-linked list data structure implementation which provides for fast list element removal and restoration. The Dancing Links optimization is used primarily in fast algorithms to find exact covers, and has been popularized by Knuth in Volume 4B of his seminal series The Art of Computer Programming. We describe an implementation of the Dancing Links optimization in the Rust programming language, as well as its formal verification using the ACL2 theorem prover. Rust has garnered significant endorsement in the past few years as a modern, memory-safe successor to C/C++ at companies such as Amazon, Google, and Microsoft, and is being integrated into both the Linux and Windows operating system kernels. Our interest in Rust stems from its potential as a hardware/software co-assurance language, with application to critical systems. We have crafted a Rust subset, inspired by Russinoff's Restricted Algorithmic C (RAC), which we have imaginatively named Restricted Algorithmic Rust, or RAR. In previous work, we described our initial implementation of a RAR toolchain, wherein we simply transpile the RAR source into RAC. By so doing, we leverage a number of existing hardware/software co-assurance tools with a minimum investment of time and effort. In this paper, we describe the RAR Rust subset, describe our improved prototype RAR toolchain, and detail the design and verification of a circular doubly-linked list data structure employing the Dancing Links optimization in RAR, with full proofs of functional correctness accomplished using the ACL2 theorem prover.
    Comment: In Proceedings ACL2-2023, arXiv:2311.08373. arXiv admin note: substantial text overlap with arXiv:2205.1170
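    The two key operations of the Dancing Links optimization described above can be sketched with index arrays standing in for pointers. This is only an illustrative toy of the classic technique, not the paper's RAR/Rust implementation, and it carries no formal verification.

    ```python
    class CircularList:
        """Circular doubly-linked list over indices 0..n-1, stored as
        left/right index arrays (the usual Dancing Links representation)."""

        def __init__(self, n):
            self.left = [(i - 1) % n for i in range(n)]
            self.right = [(i + 1) % n for i in range(n)]

        def remove(self, x):
            # Unlink x from its neighbours, but leave x's OWN pointers
            # intact -- this is the "dancing" trick that makes undo cheap.
            self.right[self.left[x]] = self.right[x]
            self.left[self.right[x]] = self.left[x]

        def restore(self, x):
            # x still remembers its old neighbours, so relinking is O(1).
            self.right[self.left[x]] = x
            self.left[self.right[x]] = x

        def elements(self, start=0):
            out, i = [start], self.right[start]
            while i != start:
                out.append(i)
                i = self.right[i]
            return out

    lst = CircularList(5)
    lst.remove(2)     # list now reads 0, 1, 3, 4
    lst.restore(2)    # list reads 0, 1, 2, 3, 4 again
    ```

    Because removal never clears the removed node's own pointers, a backtracking exact-cover search can undo an arbitrary sequence of removals by restoring them in reverse order.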

    Effect of viscoplasticity in HMX grains on ignition probability of dynamically loaded PBXs

    A Lagrangian Cohesive Finite Element Method (CFEM) framework is used to study the effect of viscoplasticity in HMX grains on the mechanical response and ignition behavior of two-phase polymer bonded explosives (PBXs) subjected to dynamic loads. The CFEM framework explicitly accounts for dissipation due to viscoelastic deformation of the binder, elasto-viscoplastic deformation of the HMX grains, debonding between the binder and the HMX particles, cracking of the particles and binder, and friction and frictional heating on crack surfaces. Microstructures considered have grain volume fractions from 0.72 to 0.90. Loading at piston velocities ranging from 50 to 200 m/s is considered. To study the ignition behavior, the analyses emphasize statistical quantification with sets of statistically similar microstructures, and explicit account of ignition due to formation of hotspots. To delineate the effects of viscoplasticity from other dissipative mechanisms (debonding, fracture, and friction), calculations are carried out for the same sample sets with and without viscoplasticity. Results show that viscoplasticity in HMX particles significantly affects the ignition sensitivity of PBXs. Specifically, viscoplastic deformation decreases the minimum time required to form critical hotspots and increases the range of potential ignition times. It is also found that viscoplasticity exacerbates the difference in ignition sensitivity between samples with different fractions of HMX. Overall, the evolution of the mechanical and thermal fields shows that inelasticity causes heating to concentrate in fewer hotspots with higher temperatures and larger sizes, leading to the behavior shifts observed.

    Canine abortion (1993)

    Revised 8/93/5M